How do organizations improve? There is a vast literature on organizational analysis, learning, and adaptivity. Herb Simon, in his book Administrative Behavior,1 writes that individuals decide to participate in an organization because: 1) doing so contributes to their personal objectives, 2) they’re invested in the organization’s growth, and 3) they receive remuneration from the organization. Katz and Kahn2 write that organizations are open systems that take energy in from the environment, and transform that energy. Some of the energy is spent on system maintenance, while other energy is spent on growth, outputs, and organizational learning. Psychologists, sociologists, and others may point to other sources of organizational success. Perhaps there are habits of highly successful companies,3 or maybe CEO leadership matters a lot.4 Many factors surely contribute to organizational growth and survival.
In this project, we imagine an organization that mostly relies on human capital to produce outputs. Individuals working in the organization work individually on tasks, and sometimes they make mistakes. The company wants to reduce mistakes, because mistakes take away from the company’s overall level of productivity. But mistakes aren’t the only input into productivity. Trust also matters. We think of trust as representative of an organization’s culture. If people trust each other, they are able to be more productive; perhaps they like each other, and are better able to collaborate and work on new ideas together. The absence of trust, even in an organization that makes no mistakes, will lead to an unproductive company. Much to the chagrin of those who professed scientific management at the start of the 20th century,5 humans are not machines. To be productive requires both competence and an organizational culture that allows individuals to work together.
Thus, we propose an agent-based model to explore the influence of trust and learning on organizational performance. Our model focuses on how error reporting at the individual level influences firm-level performance through learning and trust-reduction mechanisms. We model an organization composed of n individuals, each with a baseline productivity drawn from a uniform distribution between 0 and 1 (\(p \sim U_{[0,1]}\)) and a baseline trust level t. The organization encourages employees to report errors made by their colleagues with a reward b.
In each time period, the model runs in four steps (see Figure 1). First, agents work, with a probability \(p_m\) of making a mistake. Second, agents observe their colleagues; each mistake is noticed with probability \(p_o\). Third, agents decide whether to report a colleague’s mistake based on the utility function \(U(\mathtt{reporting}) = b \times (1-\mathtt{trust})\). We use this utility function under the naive belief that, in high-trust environments, individuals are more forgiving of others, while in low-trust environments, they may seek easy “wins” against their coworkers. That utility is then translated into a probability between 0 and 1. Last, agents are notified if they have been reported; reported agents reduce their trust by r and, at the same time, reduce their error rate by l.
Figure 1: Model Iteration
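The four steps above can be sketched in code. This is a minimal illustration, not our exact implementation: the function and field names are our own, each agent carries its own error rate (initialized to \(p_m\)), and the utility is clipped to [0, 1] as a simple stand-in for the scaling functions discussed later.

```python
import random

def step(agents, p_o=0.2, b=1.0, r=0.1, l=0.1):
    """One model iteration: work, observe, report, update (a sketch)."""
    # 1. Work: each agent makes a mistake with its current error rate.
    for a in agents:
        a["erred"] = random.random() < a["p_error"]
    # 2-3. Observe and decide whether to report.
    reported = set()
    for i, a in enumerate(agents):
        if a["erred"] and random.random() < p_o:
            observer = random.choice([x for x in agents if x is not a])
            u = b * (1 - observer["trust"])        # utility of reporting
            p_report = min(1.0, max(0.0, u))       # clipped to [0, 1] (our assumption)
            if random.random() < p_report:
                reported.add(i)
    # 4. Reported agents lose trust by r and learn (error rate falls by l).
    for i in reported:
        agents[i]["trust"] *= (1 - r)
        agents[i]["p_error"] *= (1 - l)
    return reported
```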
To simplify reality, the model is based on the following assumptions:
We then record in each time period the average trust among all individuals, the total number of errors and calculate firm-level productivity as:
\[ \mathtt{Productivity} =\sum_{i=1}^n p_i\times\mathbf{1}_i^e\times\overline{trust} \] where \(p_i\) is individual i’s productivity, \(\mathbf{1}_i^e = 1\) if individual i doesn’t make a mistake (and 0 otherwise), and \(\overline{trust}\) is the average trust across the entire company.
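The productivity formula translates directly into code. A minimal sketch (the function name and argument layout are our own):

```python
def productivity(p, erred, trust):
    """Firm-level productivity: sum of individual productivity over agents
    who made no mistake this period, scaled by average trust."""
    avg_trust = sum(trust) / len(trust)
    return sum(pi for pi, e in zip(p, erred) if not e) * avg_trust
```

Note that the average-trust factor means a firm with zero trust is unproductive even if no agent errs, matching our framing above.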
The base model update rules for a reported agent are as follows:
\[ \texttt{trust}_{t+1} = \texttt{trust}_t \times (1-r)\\\\ p(\texttt{error})_{t+1} = p(\texttt{error})_t \times (1-l) \]
| Parameter | Value(s) |
|---|---|
| The cost of an error to productivity | Binary (0, 1) |
| Initial value of trust (t) | 0.1, 0.5, 0.9 |
| Number of employees (n) | 50 |
| Error-reporting reward (b) | 0.5, 1, 10 |
| Individual probability of error (\(p_m\)) | 0.1 |
| Probability an error is observed (\(p_o\)) | 0.2 |
| Trust reduction after being reported (r) | 0.1, 0.5 |
| Error-rate reduction due to learning (l) | 0.1 |
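The 18 base-model configurations come from crossing the three varied parameters, which can be enumerated as follows (a sketch; key names are our own, and n, \(p_m\), \(p_o\), and l are held fixed):

```python
from itertools import product

# 3 initial trust levels x 3 reward amounts x 2 trust-reduction rates = 18 runs
grid = [
    {"t": t, "b": b, "r": r}
    for t, b, r in product([0.1, 0.5, 0.9], [0.5, 1, 10], [0.1, 0.5])
]
```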
Figures 2 through 5 show the results from our base model. In the base model, agents who are found to have made a mistake both lose trust in others (by either 10 percent or 50 percent) and reduce their rate of errors by 10 percent. Figures 2 through 4 show the 18 different parameter specifications. The columns represent different reward amounts, and the rows represent the different reductions in trust. Figure 2 represents an initial trust level of 0.1; Figure 3 represents an initial trust level of 0.5; and Figure 4 represents an initial trust level of 0.9.
For most models, we run 5 different simulations of 100 time steps each. The average of the five simulations is reported here.
Figure 2: Effect of Reduced Trust and Reward Over Time, Low Initial Trust Environment
Figure 3: Effect of Reduced Trust and Reward Over Time, Moderate Initial Trust Environment
Figure 4: Effect of Reduced Trust and Reward Over Time, High Initial Trust Environment
Because we define productivity to be a function of both error rates and trust, overall productivity in Figure 2 is always very low, with productivity going to zero in the case where trust is reduced by 50 percent every time an agent is found to have made a mistake. In Figures 3 and 4, we can more clearly see a gradual decrease in both the number of errors and the overall level of organizational productivity.
Figure 5, which shows average trust over time for all 18 parameter configurations in the base model, demonstrates that, without any mechanism to recover trust, trust always decreases over time — in some cases, rather dramatically.
Figure 5: Trust Over Time, All Parameter settings
Across all four of these initial figures, we see that the size of the reward doesn’t significantly impact the overall macro-level outcomes. In all cases, errors decrease (and therefore, we infer, are being reported), but so too does productivity.
We ran four alternative specifications of the base model, and we show a representative set of figures to demonstrate how these specifications change the results from the base model.
In figure 6, we show the specification where agents can either learn (decrease their error rate) or decrease trust. Here, we find that productivity decreases (albeit more slowly than in the base model), whereas error rates remain relatively unchanged.
Figure 6: Effect of Reduced Trust or Learning, Moderate Trust Environment
In figure 7, we see the scenario where agents only lose trust, and never learn. This specification, unsurprisingly, sees no changes to the overall number of errors, while productivity falls swiftly.
Figure 7: Effect of Reduced Trust Only, Moderate Trust Environment
In Figure 8, we see the alternative scenario: agents only learn, and trust remains unchanged. In this case, we also ran 1000 time steps, rather than 100, to account for the longer horizon perhaps necessary for productivity to increase. Because there are no scenarios in which trust is reduced, we’ve changed the format of this figure to reflect all 9 parameter settings. We see that learning only really happens in the moderate trust scenario. In the high trust scenario, error rates and productivity remain basically unchanged. It seems that, in the low trust scenario, learning may happen in the moderate-reward regime. It is also possible that we have made an error.
Figure 8: Effect of Learning only
Finally, in Figure 9, we see the scenario in which agents recover trust if they go unreported for a time step. In this case, we also ran 1000 time steps, rather than 100, to account for the longer horizon perhaps necessary for trust to recover. As Figure 9 shows, when the loss in trust from being reported is small, productivity recovers quite quickly, while the number of errors is reduced over time. With a larger reduction in trust, however, productivity levels off but never returns to its initial level.
Figure 9: Effect of Recovered Trust and Learning, Moderate Trust Environment
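The recovery rule can be sketched as a reversion toward the agent’s initial trust level. This is illustrative: the recovery rate g is our assumption, as the text does not state the exact speed of recovery.

```python
def recover_trust(trust, initial_trust, g=0.05):
    """Trust recovery for an agent who goes unreported this period:
    revert toward the initial trust level at rate g (g is assumed)."""
    return trust + g * (initial_trust - trust)
```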
These five specifications show an interesting set of scenarios in which trust, errors, and productivity all interact. Clearly, the base model is the worst case. While the number of errors is slowly reduced, average trust plummets quickly, and so too does productivity. Because productivity is scaled by average trust, while errors are individual mistakes, errors can be reduced only slowly while the “environment” takes on an overwhelmingly negative atmosphere. In our specification, culture matters, even when individuals are improving their processes.
Scenario two, where agents either learn or lose trust, doesn’t fare much better; it just delays the process. Without a mechanism to recover trust, there isn’t much that can be done. In scenario four, where trust is never lost, and scenario five, where trust is regained, we see what we would hope to see: productivity improves, while the total number of errors decreases, albeit slowly.
It is rather strange that, at least in our model, the reward doesn’t seem to matter much. It’s possible that our utility function is too simplistic. We argue that if trust is high, agents won’t find much utility in reporting errors. This makes it extremely unlikely that errors are reported in high-trust environments, and extremely likely that errors are reported in low-trust environments, making the scale of the reward perhaps insignificant. Additionally, we don’t include a cost to reporting. Perhaps reporting might come with negative repercussions, or multiple reports might come with negative repercussions, making for a more complex interplay over time. Alternatively, it’s possible that our scaling function (which translates the utility into a probability) is to blame. We tried a number of scaling functions (sigmoid, hyperbolic tangent, and simple rescaling), and none seemed to make much of a difference.
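The three scaling families we tried can be sketched as follows. These are generic forms, offered as illustrations; the exact parameterizations used in our runs may have differed (in particular, u_max for the linear rescaling is an assumed bound).

```python
import math

# Three candidate maps from utility u = b * (1 - trust) to a
# reporting probability in [0, 1]; u is nonnegative here since b > 0
# and trust lies in [0, 1].

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def tanh_scale(u):
    return math.tanh(u)                     # maps u >= 0 into [0, 1)

def rescale(u, u_max=10.0):
    return min(1.0, max(0.0, u / u_max))    # linear, clipped (u_max assumed)
```

One reason the reward may wash out: all three maps are monotone and saturate (or clip), so large rewards like b = 10 push the reporting probability toward 1 regardless of the specific form.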
In all, this simple model of error reporting is the starting point for a richer possible model. In an environment where trust and errors intermix to create a productive workplace, management (or whoever is trying to encourage the reporting of errors) needs to bear in mind that creating a culture of reporting may also create a culture of fear. This may differ by work environment. We’re imagining a scenario in which workers make errors themselves, rather than a manufacturing environment where things might go wrong through nobody’s fault in particular.
As stated in the discussion, this model provides a rich basis for future expansion. We believe there are a number of additional parameters that could be interesting to include or tweak, and we mention them briefly here. First, we currently define mistakes as binary: they either happen or they do not. More realistically, errors vary in scope or size; some errors are big problems that should be reported, while others are small and, if reported, may signal an act of bad faith. Second, and relatedly, we don’t include any cost to reporting. Perhaps reporting should be infrequent, or should only happen when the error is big enough. We instead make reporting probabilistic. While that might be realistic (you don’t catch every mistake a coworker makes!), it lacks the richness to consider when reporting should happen. Third, it’s clear that there needs to be a way to recover trust. Our simple mechanism, reversion to the mean, is perhaps a good starting point. Yet it is also conceivable that trust is a function of the firm’s productivity; in a successful firm, people may be more trusting, more forgiving, and less likely to point out when things are going wrong, since it might not matter all that much. Fourth, some agents may make more mistakes than others. While learning and losing trust are two possible responses, a third (being fired) or a fourth (choosing to quit) may better represent firm dynamics. Fifth, and finally, we currently model a firm with no hierarchy; agents “report” other agents (to whom, we don’t know), and all errors are the same. Adding a network representation of the firm, in which agents report to a particular superior for possible changes, may be more realistic as well.
Simon, Herbert A. Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization. 3rd ed. New York: Free Press; London: Collier Macmillan, 1976.↩︎
Katz, Daniel, and Robert L. Kahn. The Social Psychology of Organizations. 2nd ed. New York: Wiley, 1978.↩︎
Collins, James C., and Jerry I. Porras. Built to Last: Successful Habits of Visionary Companies. New York: Collins, 2009.↩︎
Khurana, Rakesh. Searching for a Corporate Savior: The Irrational Quest for Charismatic CEOs. Princeton, NJ: Princeton University Press, 2002.↩︎
See Taylor, Frederick Winslow. The Principles of Scientific Management. Mineola, N.Y: Dover Publications, 1997.↩︎